    The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than $1/k^{2}$

    The {\it forward-backward algorithm} is a powerful tool for solving optimization problems with an {\it additively separable} and {\it smooth} + {\it nonsmooth} structure. In the convex setting, a simple but ingenious acceleration scheme developed by Nesterov has proved useful for improving the theoretical rate of convergence of the function values from the standard $\mathcal O(k^{-1})$ down to $\mathcal O(k^{-2})$. In this short paper, we prove that the rate of convergence of a slight variant of Nesterov's accelerated forward-backward method, which produces {\it convergent} sequences, is actually $o(k^{-2})$, rather than $\mathcal O(k^{-2})$. Our arguments rely on the connection between this algorithm and a second-order differential inclusion with vanishing damping.
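
    For concreteness, here is a minimal Python sketch of an accelerated forward-backward iteration of this kind. The function names and the toy LASSO instance are our own illustration, not the paper's; the extrapolation coefficient $(k-1)/(k+\alpha-1)$ with $\alpha > 3$ is one variant in this family known to produce convergent iterates:

        import numpy as np

        def accelerated_forward_backward(grad_f, prox_g, L, x0, alpha=4.0, n_iter=500):
            """Accelerated forward-backward iteration for min f(x) + g(x),
            with f convex and L-smooth, g convex and prox-friendly.
            The coefficient (k - 1)/(k + alpha - 1), alpha > 3, is one
            extrapolation choice known to give convergent iterates
            (classical FISTA corresponds to the limiting case alpha = 3)."""
            x_prev = x0.copy()
            y = x0.copy()
            for k in range(1, n_iter + 1):
                x = prox_g(y - grad_f(y) / L, 1.0 / L)            # forward-backward step
                y = x + (k - 1) / (k + alpha - 1) * (x - x_prev)  # inertial extrapolation
                x_prev = x
            return x_prev

        # Toy LASSO instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
        rng = np.random.default_rng(0)
        A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f
        grad_f = lambda x: A.T @ (A @ x - b)
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
        x_hat = accelerated_forward_backward(grad_f, soft, L, np.zeros(100))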

    Dynamical systems and forward-backward algorithms associated with the sum of a convex subdifferential and a monotone cocoercive operator

    In a Hilbert framework, we introduce continuous and discrete dynamical systems which aim at solving inclusions governed by structured monotone operators $A = \partial\Phi + B$, where $\partial\Phi$ is the subdifferential of a convex lower semicontinuous function $\Phi$, and $B$ is a monotone cocoercive operator. We first consider the extension to this setting of the regularized Newton dynamic with two potentials. Then, we revisit some related dynamical systems, namely the semigroup of contractions generated by $A$, and the continuous gradient-projection dynamic. By a Lyapunov analysis, we show the convergence properties of the orbits of these systems. The time discretization of these dynamics gives various forward-backward splitting methods (some new) for solving structured monotone inclusions involving non-potential terms. The convergence of these algorithms is obtained under the classical step-size limitation. Perspectives are given in the field of numerical splitting methods for optimization, and multi-criteria decision processes.
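
    As a discrete-time illustration, here is a minimal Python sketch of the underlying forward-backward step for solving $0 \in \partial\Phi(x) + B(x)$ with a $\beta$-cocoercive $B$, under the classical step-size restriction $\gamma < 2\beta$. The toy problem and all names are our own assumptions, not taken from the paper:

        import numpy as np

        def forward_backward(prox_phi, B, beta, x0, n_iter=1000):
            """Forward-backward splitting for 0 in dPhi(x) + B(x), with B
            beta-cocoercive; any step gamma in (0, 2*beta) is admissible."""
            gamma = beta
            x = x0.copy()
            for _ in range(n_iter):
                x = prox_phi(x - gamma * B(x), gamma)  # forward step on B, backward on Phi
            return x

        # Toy monotone inclusion: Phi = indicator of the nonnegative orthant
        # (its prox is the projection), B(x) = M x - b with M symmetric PSD,
        # hence (1/||M||)-cocoercive by the Baillon-Haddad theorem.
        M = np.array([[2.0, 0.5], [0.5, 1.0]])
        b = np.array([1.0, -1.0])
        beta = 1.0 / np.linalg.norm(M, 2)
        proj = lambda v, t: np.maximum(v, 0.0)  # prox of the indicator, t unused
        x_sol = forward_backward(proj, lambda x: M @ x - b, beta, np.zeros(2))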

    Asymptotic behavior of gradient-like dynamical systems involving inertia and multiscale aspects

    In a Hilbert space $\mathcal H$, we study the asymptotic behaviour, as the time variable $t$ goes to $+\infty$, of nonautonomous gradient-like dynamical systems involving inertia and multiscale features. Given $\mathcal H$ a general Hilbert space, $\Phi: \mathcal H \rightarrow \mathbb R$ and $\Psi: \mathcal H \rightarrow \mathbb R$ two convex differentiable functions, $\gamma$ a positive damping parameter, and $\epsilon(t)$ a function of $t$ which tends to zero as $t$ goes to $+\infty$, we consider the second-order differential equation $$\ddot{x}(t) + \gamma \dot{x}(t) + \nabla \Phi(x(t)) + \epsilon(t) \nabla \Psi(x(t)) = 0.$$ This system models the emergence of various collective behaviors in game theory, as well as the asymptotic control of coupled nonlinear oscillators. Assuming that $\epsilon(t)$ tends to zero moderately slowly as $t$ goes to infinity, we show that the trajectories converge weakly in $\mathcal H$. The limiting equilibria are solutions of the hierarchical minimization problem which consists in minimizing $\Psi$ over the set $C$ of minimizers of $\Phi$. As key assumptions, we suppose that $\int_{0}^{+\infty}\epsilon(t)\,dt = +\infty$ and that, for every $p$ belonging to a convex cone $\mathcal C$ depending on the data $\Phi$ and $\Psi$, $$\int_{0}^{+\infty} \left[\Phi^*(\epsilon(t)p) - \sigma_C(\epsilon(t)p)\right]dt < +\infty,$$ where $\Phi^*$ is the Fenchel conjugate of $\Phi$, and $\sigma_C$ is the support function of $C$. An application is given to coupled oscillators.
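
    A simple way to see the hierarchical selection at work is to integrate the equation numerically. Below is a minimal Python sketch (explicit in the forces, semi-implicit in the velocity) on a toy instance of our own choosing, where $\Phi$ has a whole line of minimizers and $\Psi$ selects its minimum-norm point:

        import numpy as np

        def multiscale_trajectory(grad_phi, grad_psi, eps, x0, gamma=1.0, h=1e-2, T=200.0):
            """Semi-implicit Euler integration of
            x'' + gamma x' + grad Phi(x) + eps(t) grad Psi(x) = 0."""
            x = x0.copy()
            v = np.zeros_like(x)
            t = 0.0
            while t < T:
                v += -h * (gamma * v + grad_phi(x) + eps(t) * grad_psi(x))
                x += h * v
                t += h
            return x

        # Toy hierarchy: Phi(x) = 0.5*x[0]**2 has argmin C = {x : x[0] = 0};
        # Psi(x) = 0.5*||x||^2 then selects the minimum-norm point of C, the origin.
        grad_phi = lambda x: np.array([x[0], 0.0])
        grad_psi = lambda x: x
        eps = lambda t: 1.0 / (1.0 + t)  # slow decay, so the integral of eps diverges
        print(multiscale_trajectory(grad_phi, grad_psi, eps, np.array([2.0, 2.0])))

    With this slowly decaying $\epsilon(t)$ the trajectory drifts toward the origin, the minimum-norm minimizer of $\Phi$, as the weak-convergence result predicts.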

    Asymptotic behavior of coupled dynamical systems with multiscale aspects

    We study the asymptotic behavior, as time $t$ goes to infinity, of nonautonomous dynamical systems involving multiscale features. These systems model the emergence of various collective behaviors in game theory, as well as the asymptotic control of coupled systems.

    Fast convex optimization via inertial dynamics with Hessian driven damping

    We first study the fast minimization properties of the trajectories of the second-order evolution equation $$\ddot{x}(t) + \frac{\alpha}{t} \dot{x}(t) + \beta \nabla^2 \Phi(x(t))\dot{x}(t) + \nabla \Phi(x(t)) = 0,$$ where $\Phi:\mathcal H\to\mathbb R$ is a smooth convex function acting on a real Hilbert space $\mathcal H$, and $\alpha$, $\beta$ are positive parameters. This inertial system combines an isotropic viscous damping which vanishes asymptotically, and a geometrical Hessian-driven damping, which makes it naturally related to Newton's and Levenberg-Marquardt methods. For $\alpha\geq 3$, $\beta > 0$, along any trajectory, fast convergence of the values $\Phi(x(t)) - \min_{\mathcal H}\Phi = \mathcal O(t^{-2})$ is obtained, together with rapid convergence of the gradients $\nabla\Phi(x(t))$ to zero. For $\alpha > 3$, just assuming that $\Phi$ has minimizers, we show that any trajectory converges weakly to a minimizer of $\Phi$, and $\Phi(x(t)) - \min_{\mathcal H}\Phi = o(t^{-2})$. Strong convergence is established in various practical situations. For the strongly convex case, convergence can be arbitrarily fast depending on the choice of $\alpha$. More precisely, we have $\Phi(x(t)) - \min_{\mathcal H}\Phi = \mathcal O(t^{-\frac{2}{3}\alpha})$. We extend the results to the case of a general proper lower-semicontinuous convex function $\Phi : \mathcal H \rightarrow \mathbb R \cup \{+\infty\}$. This is based on the fact that the inertial dynamic with Hessian-driven damping can be written as a first-order system in time and space. By explicit-implicit time discretization, this opens a gate to new, possibly more rapid, inertial algorithms, expanding the field of FISTA methods for convex structured optimization problems.
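
    The identity $\nabla^2\Phi(x(t))\dot{x}(t) = \frac{d}{dt}\nabla\Phi(x(t))$ is what makes the dynamic implementable without second-order information. Here is a minimal Python sketch of a finite-difference integration exploiting it; the discretization, step size, horizon, and toy quadratic are our own assumptions, not the paper's scheme:

        import numpy as np

        def hessian_damped_trajectory(grad_phi, x0, alpha=4.0, beta=1.0, h=1e-2, t0=1.0, T=50.0):
            """Integrates x'' + (alpha/t) x' + beta*d/dt[grad Phi(x(t))] + grad Phi(x) = 0,
            starting from t0 > 0. The Hessian-driven term uses the identity
            Hess Phi(x) x' = d/dt grad Phi(x(t)), so it is discretized by a finite
            difference of gradients and no Hessian is ever formed."""
            x = x0.copy()
            v = np.zeros_like(x)
            g_prev = grad_phi(x)
            t = t0
            while t < T:
                g = grad_phi(x)
                v += -h * ((alpha / t) * v + g) - beta * (g - g_prev)
                x += h * v
                g_prev = g
                t += h
            return x

        # Toy quadratic with ill-conditioned curvature; the trajectory settles near 0.
        M = np.diag([1.0, 10.0])
        print(hessian_damped_trajectory(lambda x: M @ x, np.array([3.0, -2.0])))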

    A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions

    In a general Hilbert framework, we consider continuous gradient-like dynamical systems for constrained multiobjective optimization involving nonsmooth convex objective functions. Our approach is in the line of a previous work which considered the case of convex differentiable objective functions. Based on the Yosida regularization of the subdifferential operators involved in the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function, and the convergence of trajectories to weak Pareto minima. This approach provides a dynamical endogenous weighting of the objective functions. Applications are given to cooperative games, inverse problems, and numerical multiobjective optimization.
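
    To make the smooth analogue concrete, here is a minimal Python sketch of such a dynamic for two differentiable objectives, using the minimal-norm convex combination of the gradients as the common descent direction. This is a steepest-descent stand-in for the paper's subdifferential construction, and the toy objectives are our own:

        import numpy as np

        def pareto_descent(grad1, grad2, x0, h=1e-2, n_iter=2000):
            """Explicit discretization of a gradient-like Pareto dynamic for two
            smooth convex objectives: steer along minus the minimal-norm convex
            combination of the two gradients. The weight theta is recomputed at
            every point, i.e. the weighting is endogenous to the trajectory."""
            x = x0.copy()
            for _ in range(n_iter):
                g1, g2 = grad1(x), grad2(x)
                d = g1 - g2
                denom = d @ d
                # argmin over theta in [0, 1] of ||theta*g1 + (1 - theta)*g2||^2
                theta = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
                x -= h * (theta * g1 + (1.0 - theta) * g2)
            return x

        # Two quadratic objectives with distinct minimizers (1,0) and (0,1);
        # the trajectory stops on the Pareto segment joining them.
        grad1 = lambda x: x - np.array([1.0, 0.0])
        grad2 = lambda x: x - np.array([0.0, 1.0])
        print(pareto_descent(grad1, grad2, np.array([2.0, 2.0])))  # ~ (0.5, 0.5)

    The iteration is stationary exactly when $0$ lies in the convex hull of the two gradients, i.e. at Pareto-critical points, which is where the weak Pareto minima of the continuous dynamic live.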